Context-Aware Dynamic Chunking for Streaming Tibetan Speech Recognition

Wang, Chao, Cai, Yuqing, Duojie, Renzeng, Zhang, Jin, Liu, Yutong, Tashi, Nyima

arXiv.org Artificial Intelligence

ABSTRACT: In this work, we propose a streaming speech recognition framework for Amdo Tibetan, built upon a hybrid CTC/Attention architecture with a context-aware dynamic chunking mechanism. The proposed strategy adaptively adjusts chunk widths based on encoding states, enabling flexible receptive fields, cross-chunk information exchange, and robust adaptation to varying speaking rates, thereby alleviating the context truncation problem of fixed-chunk methods. To further capture the linguistic characteristics of Tibetan, we construct a lexicon grounded in its orthographic principles, providing linguistically motivated modeling units. During decoding, an external language model is integrated to enhance semantic consistency and improve recognition of long sentences. Experimental results show that the proposed framework achieves a word error rate (WER) of 6.23% on the test set, yielding a 48.15% relative improvement over the fixed-chunk baseline, while significantly reducing recognition latency and maintaining performance close to global decoding.
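The core idea of the abstract, adapting chunk width from the encoder's running state, can be sketched in toy form. This is a minimal illustration only, not the paper's actual mechanism: it stands in a per-frame "activity" score for the encoding state, and all widths and the threshold are hypothetical values.

```python
def dynamic_chunks(scores, base=16, min_w=8, max_w=32, thresh=0.5):
    """Split a frame sequence into variable-width chunks.

    Illustrative sketch: chunks shrink when the activity score is high
    (e.g. fast speech calls for finer chunks) and grow when it is low.
    `scores` is a list of per-frame floats in [0, 1]; all constants
    here are hypothetical, not taken from the paper.
    """
    chunks, start, n = [], 0, len(scores)
    while start < n:
        # look ahead over a window of the base width to gauge activity
        window = scores[start:start + base]
        activity = sum(window) / len(window)
        width = min_w if activity > thresh else max_w
        end = min(start + width, n)
        chunks.append((start, end))
        start = end
    return chunks
```

A fixed-chunk baseline would always emit `base`-width chunks; here the widths vary, which is the property the abstract credits for handling varying speaking rates.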


Large Language Models Do Multi-Label Classification Differently

Ma, Marcus, Chochlakis, Georgios, Pandiyan, Niyantha Maruthu, Thomason, Jesse, Narayanan, Shrikanth

arXiv.org Artificial Intelligence

Multi-label classification is prevalent in real-world settings, but the behavior of Large Language Models (LLMs) in this setting is understudied. We investigate how autoregressive LLMs perform multi-label classification, focusing on subjective tasks, by analyzing the output distributions of the models at each label generation step. We find that the initial probability distribution for the first label often does not reflect the eventual final output, even in terms of relative order, and that LLMs tend to suppress all but one label at each generation step. We further observe that as model scale increases, token distributions exhibit lower entropy and higher single-label confidence, but the internal relative ranking of the labels improves. Finetuning methods such as supervised finetuning and reinforcement learning amplify this phenomenon. We introduce the task of distribution alignment for multi-label settings: aligning LLM-derived label distributions with empirical distributions estimated from annotator responses in subjective tasks. We propose both zero-shot and supervised methods which improve both alignment and predictive performance over existing approaches. We find one method -- taking the max probability over all label generation distributions instead of just using the initial probability distribution -- improves both distribution alignment and overall F1 classification without adding any additional computation.
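The specific method the abstract highlights, taking the max probability each label attains across all generation steps instead of reading off the first step's distribution, can be illustrated with a toy sketch. The label names, probabilities, and the renormalization step below are invented for illustration, not taken from the paper.

```python
def max_over_steps(step_probs):
    """Aggregate per-step label distributions by taking, for each label,
    the maximum probability it reaches at any generation step, then
    renormalizing to a single distribution.

    step_probs: list of dicts, one per label generation step,
    mapping label -> probability assigned at that step.
    Toy sketch of the "max over all label generation distributions" idea.
    """
    labels = set()
    for d in step_probs:
        labels.update(d)
    raw = {lab: max(d.get(lab, 0.0) for d in step_probs) for lab in labels}
    z = sum(raw.values())
    return {lab: p / z for lab, p in raw.items()}
```

The point of the toy example below is that a label suppressed at the first step (here "anger") can still dominate the aggregated distribution once later steps are taken into account.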


A Review of End-to-End Precipitation Prediction Using Remote Sensing Data: from Divination to Machine Learning

Zeng, Yugong, Wu, Jonathan

arXiv.org Artificial Intelligence

Precipitation prediction has undergone a profound transformation -- from early symbolic and empirical methods rooted in divination and observation, to modern technologies based on atmospheric physics and artificial intelligence. This review traces the historical and technological evolution of precipitation forecasting, presenting a survey of end-to-end precipitation prediction technologies that spans ancient practices, the foundations of meteorological science, the rise of numerical weather prediction (NWP), and the emergence of machine learning (ML) and deep learning (DL) models. We first explore traditional and indigenous forecasting methods, then describe the development of physical modeling and statistical frameworks that underpin contemporary operational forecasting. Particular emphasis is placed on recent advances in neural network-based approaches, including automated deep learning, interpretability-driven design, and hybrid physical-data models. By synthesizing research across multiple eras and paradigms, this review not only depicts the history of end-to-end precipitation prediction but also outlines future directions for next-generation forecasting systems.


Long-Term PM2.5 Forecasting Using a DTW-Enhanced CNN-GRU Model

Naeini, Amirali Ataee, Naeini, Arshia Ataee, Mohammadi, Fatemeh Karami, Ghaffarpasand, Omid

arXiv.org Artificial Intelligence

Reliable long-term forecasting of PM2.5 concentrations is critical for public health early-warning systems, yet existing deep learning approaches struggle to maintain prediction stability beyond 48 hours, especially in cities with sparse monitoring networks. This paper presents a deep learning framework that combines Dynamic Time Warping (DTW) for intelligent station similarity selection with a CNN-GRU architecture to enable extended-horizon PM2.5 forecasting in Isfahan, Iran, a city characterized by complex pollution dynamics and limited monitoring coverage. Unlike existing approaches that rely on computationally intensive transformer models or external simulation tools, our method integrates three key innovations: (i) DTW-based historical sampling to identify similar pollution patterns across peer stations, (ii) a lightweight CNN-GRU architecture augmented with meteorological features, and (iii) a scalable design optimized for sparse networks. Experimental validation using multi-year hourly data from eight monitoring stations demonstrates superior performance compared to state-of-the-art deep learning methods, achieving R2 = 0.91 for 24-hour forecasts. Notably, this is the first study to demonstrate stable 10-day PM2.5 forecasting (R2 = 0.73 at 240 hours) without performance degradation, addressing critical early-warning system requirements. The framework's computational efficiency and independence from external tools make it particularly suitable for deployment in resource-constrained urban environments.
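The first of the three components listed above, DTW-based selection of peer stations with similar pollution patterns, can be sketched with the classic dynamic-programming form of Dynamic Time Warping. The station names and series below are invented, and a real pipeline would use windowed, normalized series; this is a minimal sketch only.

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW between two 1-D series."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def most_similar_stations(target, peers, k=3):
    """Rank peer stations by DTW distance to the target station's series.

    peers: dict mapping station name -> historical series.
    Sketch of the similarity-selection step; not the paper's exact code.
    """
    return sorted(peers, key=lambda name: dtw_distance(target, peers[name]))[:k]
```

Unlike Euclidean distance, DTW tolerates local time shifts, which is why it is a natural choice for matching pollution episodes that unfold at slightly different speeds across stations.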


Simple and Robust Forecasting of Spatiotemporally Correlated Small Earth Data with A Tabular Foundation Model

Yang, Yuting, Mei, Gang, Ma, Zhengjing, Xu, Nengxiong, Peng, Jianbing

arXiv.org Artificial Intelligence

Small Earth data are geoscience observations with limited short-term monitoring variability, providing sparse but meaningful measurements, typically exhibiting spatiotemporal correlations. Spatiotemporal forecasting on such data is crucial for understanding geoscientific processes despite their small scale. However, conventional deep learning models for spatiotemporal forecasting require task-specific training for different scenarios. Foundation models do not need task-specific training, but they often exhibit forecasting bias toward the global mean of the pretraining distribution. Here we propose a simple and robust approach for spatiotemporally correlated small Earth data forecasting. The essential idea is to characterize and quantify spatiotemporal patterns of small Earth data and then utilize tabular foundation models for accurate forecasting across different scenarios. Comparative results across three typical scenarios demonstrate that our forecasting approach achieves superior accuracy compared to the graph deep learning model (T-GCN) and tabular foundation model (TabPFN) in the majority of instances, exhibiting stronger robustness.

Keywords: Small Earth data, Spatiotemporal correlations, Tabular foundation model, Forecasting, Deep learning

1. Introduction

Small Earth data refers to geoscience time-series observations in which short-term monitoring provides limited informative variation, resulting in only sparse but meaningful measurements being available. These data predominantly possess spatiotemporal correlations. Despite their small scale, forecasting on such data is of critical importance for understanding geoscientific processes (Saad et al., 2024; Yu et al., 2024).
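The essential idea, recasting spatiotemporally correlated series into a form a tabular model can consume, can be sketched as a feature-construction step. The choice of features here (station coordinates plus lagged values) is an assumption for illustration, not the paper's actual characterization of spatiotemporal patterns.

```python
def to_tabular(series_by_station, coords, n_lags=3):
    """Flatten spatiotemporally correlated series into tabular rows.

    Each row pairs spatial context with temporal context:
        features = (x, y, lag_1, ..., lag_n),  target = next value.
    Such rows could then be fed to a tabular foundation model (e.g.
    TabPFN) without task-specific training. Feature choices here are
    hypothetical, for illustration only.
    """
    X, y = [], []
    for name, series in series_by_station.items():
        sx, sy = coords[name]  # station coordinates as spatial features
        for t in range(n_lags, len(series)):
            X.append([sx, sy] + list(series[t - n_lags:t]))
            y.append(series[t])
    return X, y
```

Pooling rows from all stations into one table is what lets a single tabular model exploit the cross-station (spatial) structure alongside the lagged (temporal) structure.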


Robustness in Both Domains: CLIP Needs a Robust Text Encoder

Rocamora, Elias Abad, Schlarmann, Christian, Singh, Naman Deep, Wu, Yongtao, Hein, Matthias, Cevher, Volkan

arXiv.org Artificial Intelligence

Adversarial input attacks can cause a significant shift of CLIP embeddings. This can affect the downstream robustness of models incorporating CLIP in the pipeline, such as text-to-image generative models or large vision language models. While some effort has been devoted to making CLIP image encoders robust, the robustness of text encoders remains unexplored. In this work, we cover this gap in the literature. We propose LEAF: an efficient adversarial fine-tuning method for the text domain, with the ability to scale to large CLIP models. Our models significantly improve the zero-shot adversarial accuracy in the text domain, while maintaining the vision performance provided by robust image encoders. When combined with text-to-image diffusion models, we can improve the generation quality under adversarial noise. In multimodal retrieval tasks, LEAF improves the recall under adversarial noise over standard CLIP models. Finally, we show that robust text encoders facilitate better reconstruction of input text from its embedding via direct optimization.


VeMo: A Lightweight Data-Driven Approach to Model Vehicle Dynamics

Oddo, Girolamo, Nuca, Roberto, Parsani, Matteo

arXiv.org Artificial Intelligence

Abstract--Developing a dynamic model for a high-performance vehicle is a complex problem that requires extensive structural information about the system under analysis. This information is often unavailable to those who did not design the vehicle and represents a typical issue in autonomous driving applications, which are frequently developed on top of existing vehicles; therefore, vehicle models are developed under conditions of information scarcity. This paper proposes a lightweight encoder-decoder model based on Gated Recurrent Unit layers to correlate the vehicle's future state with its past states, measured onboard, and the control actions the driver performs. The results demonstrate that the model achieves a maximum mean relative error below 2.6% in extreme dynamic conditions. It also shows good robustness when subject to noisy input data across the frequency components of interest. Furthermore, despite being entirely data-driven and free from physical constraints, the model exhibits physical consistency in the output signals, such as longitudinal and lateral accelerations, yaw rate, and the vehicle's longitudinal velocity. In the automotive sector, developing a representative vehicle dynamics model is a complex and multifaceted challenge [1]-[3]. Numerous nonlinear factors influence vehicle dynamics, including tire characteristics, suspension geometry, aerodynamics, drivetrain effects, and external environmental factors, such as road surface grip conditions and climatic effects (e.g., wind). Accurately capturing these effects in a computational model requires high-fidelity multibody simulation software and a profound understanding of the vehicle system.
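The Gated Recurrent Unit layers at the core of the encoder-decoder described above implement a gated state update. The single GRU step below is a standard textbook formulation (biases omitted) with random placeholder weights, not the paper's trained model; dimensions and initialization scale are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One standard GRU update (biases omitted for brevity).

    The gates decide how much of the past state h to keep, which is
    what lets such layers correlate a vehicle's future state with its
    past measured states and control actions.
    """
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])            # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])            # reset gate
    h_cand = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])  # candidate state
    return (1 - z) * h + z * h_cand                    # gated blend

def init_params(d_in, d_h, seed=0):
    """Random placeholder weights; a real model would learn these."""
    rng = np.random.default_rng(seed)
    p = {}
    for g in "zrh":
        p["W" + g] = rng.normal(scale=0.1, size=(d_in, d_h))
        p["U" + g] = rng.normal(scale=0.1, size=(d_h, d_h))
    return p
```

Because the new state is a convex blend of the old state and a tanh-bounded candidate, the hidden activations stay bounded, which contributes to the stability such lightweight recurrent models show on noisy onboard measurements.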


Beyond hospital reach: Autonomous lightweight ultrasound robot for liver sonography

Li, Zihan, Xu, Yixiao, Zhang, Lei, Han, Taiyu, Yang, Xinshan, Wang, Yingni, Liu, Mingxuan, Xin, Shenghai, Liu, Linxun, Liao, Hongen, Ning, Guochen

arXiv.org Artificial Intelligence

These authors contributed equally to this work.

Abstract: Liver disease is a major global health burden. While ultrasound is the first-line diagnostic tool, liver sonography requires locating multiple non-continuous planes from positions where target structures are often not visible, for biometric assessment and lesion detection, requiring significant expertise. However, expert sonographers are severely scarce in resource-limited regions. Here, we develop an autonomous lightweight ultrasound robot comprising an AI agent that integrates multi-modal perception with memory attention for localization of unseen target structures, and a 588-gram 6-degrees-of-freedom cable-driven robot. By mounting on the abdomen, the system enhances robustness against motion. Our robot can autonomously acquire expert-level standard liver ultrasound planes and detect pathology in patients, including two from Xining, a 2261-meter-altitude city with limited medical resources. Our system performs effectively on individuals in rapid motion and in wilderness environments. This work represents the first demonstration of autonomous sonography across multiple challenging scenarios, potentially transforming access to expert-level diagnostics in underserved regions. One-Sentence Summary: The lightweight robot enables autonomous liver non-continuous standard plane sonography across multiple scenarios. Main Text: INTRODUCTION Liver disease represents a major global health burden, accounting for over two million deaths annually--approximately 4% of worldwide mortality. Cirrhosis and hepatocellular carcinoma constitute the predominant causes of liver-related fatalities. Meanwhile, parasitic infections pose additional challenges, particularly in resource-limited settings (1-3).


A Structured Review of Underwater Object Detection Challenges and Solutions: From Traditional to Large Vision Language Models

Nabahirwa, Edwine, Song, Wei, Zhang, Minghua, Fang, Yi, Ni, Zhou

arXiv.org Artificial Intelligence

Despite its significance, the underwater world remains largely overlooked as a result of the challenging conditions that hinder traditional research methods. Historically, the study of marine ecosystems relied on labor-intensive research [1], which provided limited data and had a high error margin. In recent years, advances in autonomous and remotely operated vehicles (AUVs and ROVs) have revolutionized underwater exploration. These technologies, equipped with object detection systems, now allow real-time monitoring, which includes capturing images of marine organisms, environmental conditions, and even assessing biodiversity [2], [3]. However, the quality of images and videos captured underwater remains a significant obstacle. Light absorption, scattering, and water-related distortions, such as haze and color shifts [4], create noisy, low-contrast images, further compounded by complex underwater backgrounds and camera motion. These challenges call for advanced detection techniques capable of accurately identifying and localizing objects despite underwater noise. Efficient underwater object detection (UOD) is crucial for a variety of marine applications, including biodiversity monitoring, conservation efforts, and resource management.